
    On critical normal sections for two-dimensional immersions in R^n and a Riemann-Hilbert problem

    For orthonormal normal sections of two-dimensional immersions in R^4 we define torsion coefficients and a functional for the total torsion. We discuss normal sections that are critical for this functional. In particular, we provide a global estimate for the torsion coefficients of a critical normal section in terms of the curvature of the normal bundle.
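
    A minimal sketch of the objects involved, in notation assumed here rather than taken from the paper: for an immersion $X\colon B \to \mathbb{R}^4$ with orthonormal normal frame $(N_1, N_2)$, the torsion coefficients and the total torsion functional may be written as

        \[
          T_\alpha := \langle \partial_\alpha N_1,\, N_2 \rangle, \quad \alpha = 1, 2,
          \qquad
          \mathcal{T}(N_1, N_2) := \int_B \bigl( T_1^2 + T_2^2 \bigr)\, du\, dv,
        \]

    so that a critical normal section extremizes the total torsion among rotations of the frame, and its Euler-Lagrange equations couple the $T_\alpha$ to the curvature of the normal bundle.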

    Optimal Pricing and Quality of Academic Journals and the Ambiguous Welfare Effects of Forced Open Access: A Two-sided Model

    We analyse the optimal pricing and quality of a monopolistic journal and the optimality of open access in a two-sided model. The quality levels at which open access is optimal are determined chiefly by the nature of the (non-linear) externalities between readers and authors. We show that there exist scenarios in which open access is a feature of high-quality journals. In addition, we find that the removal of copyright (and thus forced open access) will likely increase both readership and authorship, will decrease journal profits, and may increase social welfare.
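
    As a rough illustration of the two-sided structure (all symbols below are assumptions for exposition, not the paper's notation), the journal chooses a reader price $p_R$, an author fee $p_A$, and a quality level $q$ to solve

        \[
          \max_{p_R,\, p_A,\, q} \; \Pi = p_R\, n_R + p_A\, n_A - c(q),
          \qquad
          n_R = D_R\bigl(p_R, q, v_R(n_A)\bigr), \quad
          n_A = D_A\bigl(p_A, q, v_A(n_R)\bigr),
        \]

    where $n_R$ and $n_A$ are readership and authorship, $v_R$ and $v_A$ capture the (non-linear) cross-side externalities, and open access corresponds to $p_R = 0$.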

    Multilateral Stability and Efficiency of Trade Agreements: A Network Formation Approach

    We study the endogenous network formation of bilateral and multilateral trade agreements by means of hypergraphs and introduce the equilibrium concept of multilateral stability. We consider multi-country settings with a firm in each country that produces a homogeneous good and competes as a Cournot oligopolist in each market. Under endogenous tariffs, we find that the existence of a multilateral trade agreement is always necessary for the stability of the trading system and that the formation of preferential trade agreements is always necessary for achieving global free trade. We also find that global free trade is efficient but not necessarily the only multilaterally stable trade equilibrium when countries are symmetric (or heterogeneous) in terms of market size. We derive conditions under which such a conflict between overall welfare efficiency and stability occurs.
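
    The market stage of such models is standard Cournot competition, which a short sketch can make concrete. The linear inverse demand P = a - b*Q and all numbers below are illustrative assumptions, not taken from the paper; tariffs enter as additions to the exporting firm's marginal cost.

        import numpy as np

        def cournot_quantities(a, b, effective_costs):
            # Closed-form Cournot equilibrium for a homogeneous good with
            # linear inverse demand P = a - b*Q. effective_costs[i] is firm
            # i's marginal cost plus any tariff it pays to serve this market.
            m = np.asarray(effective_costs, dtype=float)
            n = m.size
            q = (a + m.sum() - (n + 1) * m) / (b * (n + 1))
            # Quantities may come out negative if a firm is priced out; a
            # full treatment would recompute the equilibrium without it.
            return q

        # Three firms; firm 3 pays a tariff of 0.5 into this market.
        print(cournot_quantities(a=10.0, b=1.0, effective_costs=[1.0, 1.0, 1.5]))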

    Loan availability and investment: Can innovative companies better cope with loan denials?

    This study examines the consequences of loan denials for the investment performance of small and medium-sized German enterprises. As a consequence of a loan denial, innovative companies experience a smaller drop in the share of actual to planned investment than non-innovative companies. The non-randomness of loan denials is controlled for with a selection equation that employs the intensity of banking competition at the district level as an exclusion restriction. We explain the better performance of innovative companies by their ability to increase their use of external equity financing, such as venture capital or mezzanine capital, when facing a loan denial.
    Keywords: investment, loan availability, innovation, private equity
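
    The identification strategy is a standard selection model, sketched below in assumed notation (the paper's exact specification may differ): loan denial enters a first-stage selection equation whose excluded instrument is district-level banking competition, and the investment outcome equation carries a selection-correction term.

        \[
          d_i^{*} = \gamma' z_i + \delta\, \mathrm{comp}_i + u_i,
          \qquad d_i = \mathbf{1}\{d_i^{*} > 0\},
        \]
        \[
          y_i = \beta' x_i + \lambda\, \hat{\mu}_i + \varepsilon_i,
        \]

    where $y_i$ is the share of actual to planned investment, $\mathrm{comp}_i$ is banking competition at the district level (the exclusion restriction, appearing only in the selection equation), and $\hat{\mu}_i$ is the selection-correction term, e.g. an inverse Mills ratio.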

    Exploiting Data Representation for Fault Tolerance

    We explore the link between data representation and soft errors in dot products. We present an analytic model for the absolute error introduced when a soft error corrupts a bit in an IEEE-754 floating-point number. We show how this finding relates to the fundamental linear algebra concepts of normalization and matrix equilibration. We present a case study illustrating that the probability of experiencing a large error in a dot product is minimized when both vectors are normalized. Furthermore, we show that when the data are normalized the absolute error is either less than one or very large, which makes large errors detectable. We demonstrate how this finding can be used by instrumenting the GMRES iterative solver. We count all possible errors that can be introduced through faults in arithmetic in the computationally intensive orthogonalization phase, and show that when scaling is used the absolute error can be bounded above by one.
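
    A small experiment in the spirit of this abstract (the bit position, vector sizes, and helper names are illustrative assumptions): flip one bit of an IEEE-754 double inside a dot product of normalized vectors and observe that the absolute error is either modest or enormous, which is what makes large errors detectable.

        import struct

        import numpy as np

        def flip_bit(x, k):
            # Flip bit k (0 = least significant) of the IEEE-754 bit
            # pattern of a float64 and return the resulting value.
            (bits,) = struct.unpack('<Q', struct.pack('<d', x))
            (flipped,) = struct.unpack('<d', struct.pack('<Q', bits ^ (1 << k)))
            return flipped

        rng = np.random.default_rng(0)
        x = rng.standard_normal(100)
        y = rng.standard_normal(100)
        x /= np.linalg.norm(x)  # normalize both vectors, so |x.y| <= 1
        y /= np.linalg.norm(y)

        clean = np.dot(x, y)
        xc = x.copy()
        xc[0] = flip_bit(xc[0], 62)  # corrupt a high exponent bit
        print(abs(np.dot(xc, y) - clean))  # enormous, hence easy to detect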

    Evaluating the Impact of SDC on the GMRES Iterative Solver

    Increasing parallelism and transistor density, along with ever tighter energy and peak-power constraints, may force exposure of occasionally incorrect computation or storage to application codes. Silent data corruption (SDC) will likely be infrequent, yet one SDC suffices to make numerical algorithms like iterative linear solvers cease progress towards the correct answer. Thus, we focus on resilience of the iterative linear solver GMRES to a single transient SDC. We derive inexpensive checks to detect the effects of an SDC in GMRES that work for an SDC model more general than a bit flip. Our experiments show that when GMRES is used as the inner solver of an inner-outer iteration, it can "run through" SDC of almost any magnitude in the computationally intensive orthogonalization phase. That is, it gets the right answer using faulty data without any required rollback. Those SDCs that it cannot run through are caught by our detection scheme.
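
    One inexpensive check of the kind described can be built from an exact-arithmetic invariant of the Arnoldi process (an assumed illustration, not necessarily the paper's exact criterion): the new column of the Hessenberg matrix holds the coefficients of w = A*v_j in an orthonormal basis, so its 2-norm must match the norm of w.

        import numpy as np

        def arnoldi_column_check(w_norm, h_col, rel_tol=1e-8):
            # In exact arithmetic ||h_col||_2 == ||w||_2, where h_col is the
            # column [h_{1j}, ..., h_{j+1,j}] produced by orthogonalizing
            # w = A*v_j against the Krylov basis. A large mismatch flags a
            # corrupted orthogonalization, whatever the fault's magnitude.
            return abs(np.linalg.norm(h_col) - w_norm) <= rel_tol * max(w_norm, 1.0)

        # Usage inside an Arnoldi step (sketch): after computing w = A @ v,
        # its projections h, and the residual norm h_next, call
        #   arnoldi_column_check(np.linalg.norm(w), np.append(h, h_next))
        # and trigger a restart or rollback on failure.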

    Resilience in Numerical Methods: A Position on Fault Models and Methodologies

    Future extreme-scale computer systems may expose silent data corruption (SDC) to applications in order to save energy or increase performance. However, resilience research struggles to come up with useful abstract programming models for reasoning about SDC. Existing work randomly flips bits in running applications, but this only shows average-case behavior for a low-level, artificial hardware model. Algorithm developers need to understand worst-case behavior with the higher-level data types they actually use, in order to make their algorithms more resilient. Also, we know so little about how SDC may manifest in future hardware that it seems premature to draw conclusions about the average case. We argue instead that numerical algorithms can benefit from a numerical unreliability fault model, where faults manifest as unbounded perturbations to floating-point data. Algorithms can use inexpensive "sanity" checks that bound or exclude error in the results of computations. Given a selective reliability programming model that requires reliability only when and where needed, such checks can make algorithms reliable despite unbounded faults. Sanity checks, and in general a healthy skepticism about the correctness of subroutines, are wise even if hardware is perfectly reliable.
    Comment: Position Paper
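
    A toy instance of such a sanity check (the specific bound, margin, and function name are assumptions for illustration): by the Cauchy-Schwarz inequality, |x.y| <= ||x||*||y||, so a computed dot product exceeding that bound must be faulty, no matter how the underlying perturbation arose.

        import numpy as np

        def checked_dot(x, y):
            # Dot product guarded by the Cauchy-Schwarz bound; a result
            # outside the bound (plus a small rounding margin) excludes
            # correct execution, even under unbounded data perturbations.
            d = np.dot(x, y)
            bound = np.linalg.norm(x) * np.linalg.norm(y)
            if abs(d) > bound * (1.0 + 1e-12 * x.size):
                raise ArithmeticError("dot product violates Cauchy-Schwarz bound")
            return d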